Learning parameters and structure of Bayesian networks using an Implicit framework
Authors
Abstract
A large amount of work has been done in the last ten years on learning parameters and structure in Bayesian networks (BNs) (see, for example, Neapolitan, 2005). Within the classical Bayesian framework, learning parameters in BNs is based on priors: a prior distribution of the parameters (prior conditional probabilities) is chosen, and a posterior distribution is then derived given the data and priors, using different estimation procedures (for example, Maximum a posteriori (MAP) or Maximum likelihood (ML), ...). The Achilles' heel of the Bayesian framework resides in the choice of priors. Defenders of the Bayesian approach argue that using priors is, on the contrary, the strength of this approach, because it is an intuitive way to take into account the available or expert knowledge on the problem. On the other side, critics of the Bayesian paradigm have claimed that the choice of a prior is meaningless and unjustified in the absence of prior knowledge, and that different choices of priors may not lead to the same estimators. In this context, the choice of priors for learning parameters in BNs has remained a problematic and controversial issue, although some studies have claimed that the sensitivity to priors is weak when the learning database is large. Another important issue in parameter learning in BNs is that the learning datasets are seldom complete, and one has to deal with missing observations. Inference with missing data is an old problem in statistics, and several solutions have been proposed in the last three decades, starting from the pioneering work of Dempster et al. (1977).
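To make the role of priors concrete, here is a minimal sketch (not the paper's implicit framework) of learning a single conditional probability table with a symmetric Dirichlet prior, whose pseudo-counts smooth the raw frequencies; the function name, variable names, and toy data are all illustrative.

```python
# Illustrative sketch: estimating P(child | parent) from data with a
# symmetric Dirichlet(alpha) prior (posterior-mean / Laplace smoothing).
# All names and the toy dataset below are hypothetical.
from collections import Counter

def estimate_cpt(samples, parent_values, child_values, alpha=1.0):
    """Posterior-mean estimate of a CPT from (parent, child) observations.

    alpha acts as 'alpha' pseudo-counts added to each child value,
    so unseen configurations never get probability zero.
    """
    counts = Counter(samples)
    cpt = {}
    for p in parent_values:
        total = sum(counts[(p, c)] for c in child_values)
        denom = total + alpha * len(child_values)
        cpt[p] = {c: (counts[(p, c)] + alpha) / denom for c in child_values}
    return cpt

# Toy data: 3x (yes, high), 1x (yes, low), 4x (no, low)
data = [("yes", "high")] * 3 + [("yes", "low")] + [("no", "low")] * 4
cpt = estimate_cpt(data, ["yes", "no"], ["high", "low"], alpha=1.0)
```

With alpha = 1 this gives P(high | yes) = (3 + 1) / (4 + 2) = 2/3, and the never-observed (no, high) configuration still receives probability 1/6 rather than zero, which is exactly the smoothing effect the prior provides.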
These authors proposed a famous algorithm that iterates, until convergence towards a stationary point, between two steps: one called Expectation, or E-step, in which the expected values of the missing data are inferred from the current model parameter configuration, and the other, called Maximization, or M-step, in which we look for the parameter values that maximize a probability function (e.g., the likelihood). This algorithm, known as the Expectation-Maximization (EM) algorithm, has become a routine technique for parameter estimation in statistical models with missing data in a wide range of applications. Lauritzen (1995) described how to apply the EM algorithm to learn parameters for BNs of known structure using either Maximum-Likelihood (ML) or Maximum a posteriori (MAP) estimates (so-called EM-MAP) (McLachlan et al., 1997). Learning structure (the graphical structure of conditional dependencies) in BNs is a much more complicated problem, which can be formally presented in classical statistics as a model selection problem. In fact, it was shown that learning structure from data is an NP-hard problem.
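The E-step/M-step alternation described above can be sketched on a classic toy problem: two coins with unknown biases, where the coin used in each session is the missing datum. This is a generic EM illustration under assumed toy data, not the paper's method or the BN-specific algorithm of Lauritzen (1995); all names are illustrative.

```python
# Illustrative EM sketch: estimate the biases of two coins from
# (heads, tails) counts per session, when the coin identity is hidden.
def em_two_coins(trials, theta=(0.6, 0.5), n_iter=100):
    theta_a, theta_b = theta
    for _ in range(n_iter):
        # E-step: expected (fractional) counts under current parameters.
        heads_a = tails_a = heads_b = tails_b = 0.0
        for h, t in trials:
            like_a = theta_a ** h * (1 - theta_a) ** t
            like_b = theta_b ** h * (1 - theta_b) ** t
            w_a = like_a / (like_a + like_b)  # responsibility of coin A
            heads_a += w_a * h
            tails_a += w_a * t
            heads_b += (1 - w_a) * h
            tails_b += (1 - w_a) * t
        # M-step: re-estimate each bias from its expected counts
        # (the ML solution of the expected complete-data likelihood).
        theta_a = heads_a / (heads_a + tails_a)
        theta_b = heads_b / (heads_b + tails_b)
    return theta_a, theta_b

# Five sessions of 10 flips each; which coin produced each session is unknown.
trials = [(9, 1), (8, 2), (4, 6), (5, 5), (7, 3)]
theta_a, theta_b = em_two_coins(trials)
```

Each iteration provably does not decrease the observed-data likelihood, which is why the procedure converges to a stationary point rather than necessarily to the global maximum; in practice one restarts from several initial parameter configurations.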
Similar articles
An Introduction to Inference and Learning in Bayesian Networks
Bayesian networks (BNs) are modern tools for modeling phenomena in dynamic and static systems and are used in different subjects such as disease diagnosis, weather forecasting, decision making and clustering. A BN is a graphical-probabilistic model which represents causal relations among random variables and consists of a directed acyclic graph and a set of conditional probabilities. Structure...
A Surface Water Evaporation Estimation Model Using Bayesian Belief Networks with an Application to the Persian Gulf
Evaporation is an effective climate component in water resources management and is of special importance in agriculture. In this paper, Bayesian belief networks (BBNs), as a non-linear modeling technique, provide an evaporation estimation method under uncertainty. As a case study, we estimated the surface water evaporation of the Persian Gulf and worked with a dataset of observations ...
Structure Learning in Bayesian Networks Using Asexual Reproduction Optimization
A new structure learning approach for Bayesian networks (BNs) based on asexual reproduction optimization (ARO) is proposed in this letter. ARO can essentially be considered an evolution-based algorithm that mathematically models the budding mechanism of asexual reproduction. In ARO, a parent produces a bud through a reproduction operator; thereafter, the parent and its bud compete to survi...
Bayesian Estimation of Parameters in the Exponentiated Gumbel Distribution
Abstract: The Exponentiated Gumbel (EG) distribution has been proposed to capture some aspects of the data that the Gumbel distribution fails to specify. In this paper, we estimate the EG's parameters in the Bayesian framework. We consider a 2-level hierarchical structure for the prior distribution. As the posterior distributions do not admit a closed form, we perform approximate inference using ...
Learning Bayesian Network Structure Using Genetic Algorithm with Consideration of the Node Ordering via Principal Component Analysis
The most challenging task in dealing with Bayesian networks is learning their structure. Two classical approaches are often used for learning Bayesian network structure: the Constraint-Based method and the Score-and-Search-Based one. But neither the first nor the second is completely satisfactory. Therefore heuristic searches such as Genetic Alg...